
    Semi-automatic Tracking of the Hyoid bone and the Epiglottis Movements in Digital Videofluoroscopic Images

    Swallowing is a process that occurs hundreds of times per day during eating, drinking, and the clearing of saliva. Dysphagia is an abnormality in any stage of the swallowing process, and it can cause serious problems such as dehydration and respiratory infection. To help dysphagic patients, radiologists need to evaluate the patient’s swallowing ability, usually by means of a Videofluoroscopic Swallowing Study (VFSS). During the assessment, several measurements are taken and evaluated, such as the displacement of the hyoid bone and the epiglottis. Radiologists usually perform this evaluation by visual inspection, a time-consuming process that produces subjective results. Previous research has made strides in automating swallowing measurements in order to produce objective results, but no study has automatically tracked the movement of the epiglottis. This thesis presents the design and implementation of a Computer Aided Diagnosis (CAD) system that automatically tracks the movement of the hyoid bone and the epiglottis using minimal user input, and studies the correlation between these two movements. With the aid of this system, radiologists can take measurements and evaluate the health of the swallowing process more reliably and efficiently.
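
    The abstract does not describe the tracking algorithm itself, but a minimal sketch of one common approach to this kind of semi-automatic landmark tracking is normalized cross-correlation template matching: the user marks the hyoid bone (or the epiglottis) once in the first frame, and the marked patch is re-located within a small search window in every subsequent frame. The Python/OpenCV sketch below is a hypothetical illustration, not the thesis's method; track_landmark, its parameters, and the search-window heuristic are all assumptions.

        import cv2

        def track_landmark(frames, init_box, search_margin=20):
            """Track a user-marked landmark through VFSS frames.

            NOTE: hypothetical illustration; not the thesis's actual algorithm.
            frames:   sequence of grayscale frames (uint8 NumPy arrays).
            init_box: (x, y, w, h) box the user draws around the landmark
                      in frames[0] -- the only manual input required.
            Returns one (x, y) top-left position per frame.
            """
            x, y, w, h = init_box
            template = frames[0][y:y + h, x:x + w]
            positions = [(x, y)]
            for frame in frames[1:]:
                # Search only near the last known position: the hyoid bone and
                # epiglottis move only a few pixels between VFSS frames.
                x0, y0 = max(x - search_margin, 0), max(y - search_margin, 0)
                window = frame[y0:y0 + h + 2 * search_margin,
                               x0:x0 + w + 2 * search_margin]
                # Normalized cross-correlation; the peak is the best match.
                scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, best = cv2.minMaxLoc(scores)
                x, y = x0 + best[0], y0 + best[1]
                positions.append((x, y))
            return positions

    The per-frame positions yield a displacement curve for each landmark, and running the tracker once for the hyoid bone and once for the epiglottis gives the two curves whose correlation the thesis studies.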

    Range Flow: New Algorithm Design and Quantitative and Qualitative Analysis

    Optical flow computation is one of the oldest and most active research fields in computer vision and image processing. Its applications include motion estimation, video compression, object detection and tracking, dominant-plane extraction, movement detection, robot navigation, visual odometry, traffic analysis, and vehicle tracking. Optical flow methods calculate the motion between two image frames. In 2D images, optical flow specifies how far each pixel moves between adjacent frames; in 3D images, it specifies how much each voxel moves between adjacent volumes in the dataset. Since 1980, several algorithms have successfully estimated 2D and 3D optical flow. Notably, scene flow and range flow are special cases of 3D optical flow. Scene flow is the 3D optical flow of pixels on a moving surface; it is computed from the disparity and disparity-gradient maps of a stereo sequence together with the 2D optical flow of the left and right images. Range flow is similar to scene flow but is calculated from depth-map sequences or range datasets. Because the algorithms that compute scene flow and range flow clearly overlap, we propose new insights that help range flow algorithms advance to the next stage: we enhance them to handle large displacements using a hierarchical framework with a warping technique, and we apply robust statistical formulations to generate robust, dense flow that overcomes motion discontinuities and reduces outliers. Overall, this thesis focuses on the estimation of 2D optical flow and 3D range flow using several algorithms. In addition, we studied depth data obtained from different sensors and cameras. These cameras provided RGB-D data that allowed us to compute 3D range flow in two ways: using depth data only, or by combining intensity with depth data to improve the flow. We implemented the well-known local Lucas-Kanade (LK) [1] and global Horn-Schunck (HS) [2] algorithms and recast them in the proposed framework to estimate 2D and 3D range flow [3]. Furthermore, the combined local-global (CLG) algorithm proposed by Bruhn et al. [4,5], as well as the method of Brox et al. [6], were implemented to estimate 2D optical flow and 3D range flow. We tested and evaluated these approaches both qualitatively and quantitatively under two types of motion (translation and divergence) using several real datasets acquired with Kinect V2, ZED, and iPhone X (front and rear) cameras. We found that the CLG and Brox methods gave the best results on our Kinect V2, ZED, and iPhone X front-camera sequences.
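
    As a concrete baseline, the sketch below implements the classical global HS [2] method named above on a single resolution level: image derivatives followed by Jacobi-style updates of the flow field. It deliberately omits the hierarchical warping and robust formulations the thesis proposes; the function name, derivative scheme, and parameter defaults are assumptions for illustration, written in Python with NumPy and SciPy.

        import numpy as np
        from scipy.ndimage import convolve

        # Kernel approximating the local mean of the flow field in the
        # Horn-Schunck update equations.
        AVG_KERNEL = np.array([[1/12, 1/6, 1/12],
                               [1/6,  0.0, 1/6],
                               [1/12, 1/6, 1/12]])

        def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
            """Single-level Horn-Schunck 2D optical flow (a sketch of HS [2]).

            im1, im2: consecutive grayscale frames as float arrays.
            alpha:    regularization weight; larger values smooth the flow more.
            Returns (u, v): horizontal and vertical displacement per pixel.
            """
            im1 = im1.astype(np.float64)
            im2 = im2.astype(np.float64)
            # Spatial derivatives averaged over both frames, plus the
            # temporal derivative.
            Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
            Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
            It = im2 - im1
            u = np.zeros_like(im1)
            v = np.zeros_like(im1)
            for _ in range(n_iter):
                u_avg = convolve(u, AVG_KERNEL, mode='nearest')
                v_avg = convolve(v, AVG_KERNEL, mode='nearest')
                # Jacobi-style update from the Horn-Schunck Euler-Lagrange equations.
                common = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
                u = u_avg - Ix * common
                v = v_avg - Iy * common
            return u, v

    A coarse-to-fine version of the kind the abstract describes would run this on an image pyramid, warping the second frame toward the first by the upsampled flow between levels so that only small residual displacements remain at each level; a range flow variant adds the depth channel and estimates a third flow component from the depth derivatives.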